86 research outputs found

    Failure of a patient-centered intervention to substantially increase the identification and referral for treatment of ambulatory emergency department patients with occult psychiatric conditions: a randomized trial [ISRCTN61514736]

    BACKGROUND: We previously demonstrated that a computerized psychiatric screening interview (the PRIME-MD) can be used in the Emergency Department (ED) waiting room to identify patients with mental illness. In that trial, however, informing the ED physician of the PRIME-MD results did not increase the frequency of psychiatric diagnosis, consultation or referral. We conducted this study to determine whether telling the patient and physician the PRIME-MD result would result in the majority of PRIME-MD-diagnosed patients being directed toward treatment for their mental illness. METHODS: In this single-site RCT, consenting patients with non-specific somatic chief complaints (e.g., fatigue, back pain, etc.) completed the computerized PRIME-MD in the waiting room and were randomly assigned to one of three groups: patient and physician told PRIME-MD results, patient told PRIME-MD results, and neither told PRIME-MD results. The main outcome measure was the percentage of patients with a PRIME-MD diagnosis who received a psychiatric consultation or referral from the ED. RESULTS: 183 patients (5% of all ED patients) were approached. 123 eligible patients consented to participate, completed the PRIME-MD and were randomized. 95 patients had outcomes recorded. 51 (54%) had a PRIME-MD diagnosis and 8 (16%) of them were given a psychiatric consultation or referral in the ED. While the frequency of consultation or referral increased as the intervention's intensity increased (tell neither = 11% (1/9), tell patient = 15% (3/20), tell patient and physician = 18% (4/22)), no group came close to the 50% threshold we sought. For this reason, we stopped the trial after an interim analysis. CONCLUSION: Patients willingly completed the PRIME-MD and 54% had a PRIME-MD diagnosis. Unfortunately, at our institution, informing the patient (and physician) of the PRIME-MD results infrequently led to the patient being directed toward care for their psychiatric condition.

    Meta-analyses and Forest plots using a microsoft excel spreadsheet: step-by-step guide focusing on descriptive data analysis

    Background: Meta-analyses are necessary to synthesize data obtained from primary research, and in many situations reviews of observational studies are the only available alternative. General purpose statistical packages can meta-analyze data, but usually require external macros or coding. Commercial specialist software is available, but may be expensive and focused on a particular type of primary data. Most available software packages have limitations in dealing with descriptive data, and the graphical display of summary statistics such as incidence and prevalence is unsatisfactory. Analyses can be conducted using Microsoft Excel, but no previous guide was available. Findings: We constructed a step-by-step guide to performing a meta-analysis in a Microsoft Excel spreadsheet, using either fixed-effect or random-effects models. We have also developed a second spreadsheet capable of producing customized forest plots. Conclusions: It is possible to conduct a meta-analysis using only Microsoft Excel. More importantly, to our knowledge this is the first description of a method for producing a statistically adequate but graphically appealing forest plot summarizing descriptive data, using widely available software.
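
    The guide's formulas are not reproduced in the abstract, but the inverse-variance pooling such a spreadsheet implements is compact enough to show directly. The Python sketch below (using made-up effect sizes and standard errors, not data from the paper) illustrates the fixed-effect calculation and the DerSimonian-Laird random-effects variant that the spreadsheet offers.

    ```python
    import numpy as np

    # Hypothetical per-study estimates (e.g., prevalences) and their standard
    # errors; in the spreadsheet these would be the two input columns.
    effects = np.array([0.12, 0.18, 0.09, 0.15])    # point estimates
    se      = np.array([0.02, 0.03, 0.015, 0.025])  # standard errors

    # Fixed-effect (inverse-variance) pooling: weight each study by 1/SE^2.
    w = 1.0 / se**2
    pooled = np.sum(w * effects) / np.sum(w)
    pooled_se = np.sqrt(1.0 / np.sum(w))
    ci_low, ci_high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se

    # DerSimonian-Laird random-effects model: estimate the between-study
    # variance tau^2 from Cochran's Q and add it to each study's variance.
    q = np.sum(w * (effects - pooled) ** 2)
    tau2 = max(0.0, (q - (len(effects) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1.0 / (se**2 + tau2)
    pooled_re = np.sum(w_re * effects) / np.sum(w_re)

    print(f"fixed effect: {pooled:.3f} (95% CI {ci_low:.3f} to {ci_high:.3f})")
    print(f"random effects: {pooled_re:.3f}, tau^2 = {tau2:.5f}")
    ```

    A forest plot is then just each study estimate drawn with its confidence interval plus a summary row for the pooled result, which is what the authors' second spreadsheet automates for descriptive data.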

    Effectiveness of electronic guideline-based implementation systems in ambulatory care settings - a systematic review

    Background: Electronic guideline-based decision support systems have been suggested to successfully deliver the knowledge embedded in clinical practice guidelines. A number of studies have already shown positive findings for decision support systems such as drug-dosing systems and computer-generated reminder systems for preventive care services. Methods: A systematic literature search (1990 to December 2008) of the English literature indexed in the Medline database, Embase, the Cochrane Central Register of Controlled Trials, and CRD (DARE, HTA and NHS EED databases) was conducted to identify evaluation studies of electronic multi-step guideline implementation systems in ambulatory care settings. Important inclusion criteria were the multidimensionality of the guideline (the guideline needed to consist of several aspects or steps) and real-time interaction with the system during consultation. Clinical decision support systems such as one-time reminders for preventive care, for which positive findings were shown in earlier reviews, were excluded. Two comparisons were considered: electronic multidimensional guidelines versus usual care (comparison one) and electronic multidimensional guidelines versus other guideline implementation methods (comparison two). Results: Twenty-seven publications were selected for analysis in this systematic review. Most designs were cluster randomized controlled trials investigating process outcomes more than patient outcomes. With success defined as at least 50% of the outcome variables being significant, none of the studies were successful in improving patient outcomes. Only seven of seventeen studies that investigated process outcomes showed improvements in process-of-care variables compared with the usual care group (comparison one). No incremental effect of the electronic implementation over the distribution of paper versions of the guideline was found, either for patient outcomes or for process outcomes (comparison two). Conclusions: There is little evidence at the moment for the effectiveness of an increasingly used and commercialised instrument such as electronic multidimensional guidelines. After more than a decade of development of numerous electronic systems, research on the most effective implementation strategy for this kind of guideline-based decision support system is still lacking. This conclusion implies a considerable risk of inappropriate investments in ineffective implementation interventions and of suboptimal care.

    Selection and Presentation of Imaging Figures in the Medical Literature

    Background: Images are important for conveying information, but there is no empirical evidence on whether imaging figures are properly selected and presented in the published medical literature. We therefore evaluated the selection and presentation of radiological imaging figures in major medical journals. Methodology/Principal Findings: We analyzed articles published in 2005 in 12 major general and specialty medical journals that had radiological imaging figures. For each figure, we recorded information on selection, study population, provision of quantitative measurements, color scales and contrast use. Overall, 417 images from 212 articles were analyzed. A comment or hint on image selection was made for 44 (11%) images (range 0–50% across the 12 journals), and another 37 (9%) (range 0–60%) showed both a normal and an abnormal appearance. For 108 images (26%) (range 0–43%) it was unclear whether the image came from the presented study population. Eighty-three images (20%) (range 0–60%) provided a quantitative or ordered categorical value for a measure of interest. Information on the distribution of the measure of interest in the study population was given in 59 cases. For 43 images (range 0–40%), a quantitative measurement was provided for the depicted case and the distribution of values in the study population was also available; in those 43 cases, extreme values were not over-represented relative to average ones (p = 0.37). Significance: The selection and presentation of images in the medical literature is often insufficiently documented; quantitative data are sparse and difficult to place in context.

    Reporting of Human Genome Epidemiology (HuGE) association studies: An empirical assessment

    Background: Several thousand human genome epidemiology association studies are published every year investigating the relationship between common genetic variants and diverse phenotypes. Transparent reporting of study methods and results allows readers to better assess the validity of study findings. Here, we document reporting practices of human genome epidemiology studies. Methods: Articles were randomly selected from a continuously updated database of human genome epidemiology association studies to be representative of genetic epidemiology literature. The main analysis evaluated 315 articles published in 2001–2003. For a comparative update, we evaluated 28 more recent articles published in 2006, focusing on issues that were poorly reported in 2001–2003. Results: During both time periods, most studies comprised relatively small study populations and examined one or more genetic variants within a single gene. Articles were inconsistent in reporting the data needed to assess selection bias and the methods used to minimize misclassification (of the genotype, outcome, and environmental exposure) or to identify population stratification. Statistical power, the use of unrelated study participants, and the use of replicate samples were reported more often in articles published during 2006 when compared with the earlier sample. Conclusion: We conclude that many items needed to assess error and bias in human genome epidemiology association studies are not consistently reported. Although some improvements were seen over time, reporting guidelines and online supplemental material may help enhance the transparency of this literature.

    Statistical Reviewers Improve Reporting in Biomedical Articles: A Randomized Trial

    BACKGROUND: Although peer review is widely considered to be the most credible way of selecting manuscripts and improving the quality of accepted papers in scientific journals, there is little evidence to support its use. Our aim was to estimate the effects on manuscript quality of either adding a statistical peer reviewer or suggesting the use of checklists such as CONSORT or STARD to clinical reviewers, or both. METHODOLOGY AND PRINCIPAL FINDINGS: Interventions were defined as 1) the addition of a statistical reviewer to the clinical peer review process, and 2) suggesting reporting guidelines to reviewers; with “no statistical expert” and “no checklist” as controls. The two interventions were crossed in a 2×2 balanced factorial design including original research articles consecutively selected, between May 2004 and March 2005, by the Medicina Clinica (Barc) editorial committee. We randomized manuscripts to minimize differences in terms of baseline quality and type of study (intervention, longitudinal, cross-sectional, others). Sample-size calculations indicated that 100 papers would provide 80% power to detect a standardized difference of 0.55. We specified the main outcome as the increment in quality of papers as measured on the Goodman Scale. Two blinded evaluators rated the quality of manuscripts at initial submission and in the final post-peer-review version. Of the 327 manuscripts submitted to the journal, 131 were accepted for further review, and 129 were randomized. Of those, 14 were lost to follow-up; they showed no differences in initial quality from the followed-up papers. Hence, 115 were included in the main analysis, with 16 rejected for publication after peer review. 21 (18.3%) of the 115 included papers were interventions, 46 (40.0%) were longitudinal designs, 28 (24.3%) cross-sectional and 20 (17.4%) others. The 16 (13.9%) rejected papers had a significantly lower initial score on the overall Goodman scale than accepted papers (difference 15.0, 95% CI: 4.6 to 24.4). Suggesting a guideline to the reviewers had no effect on the change in overall quality as measured by the Goodman scale (0.9, 95% CI: −0.3 to +2.1). The estimated effect of adding a statistical reviewer was 5.5 (95% CI: 4.3 to 6.7), a significant improvement in quality. CONCLUSIONS AND SIGNIFICANCE: This prospective randomized study shows the positive effect of adding a statistical reviewer to the field-expert peers in improving manuscript quality. We did not find a statistically significant positive effect of suggesting that reviewers use reporting guidelines.
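
    The sample-size statement (100 papers giving roughly 80% power to detect a standardized difference of 0.55) can be checked with a standard two-sample power calculation. The sketch below is an independent illustration of that arithmetic, not the authors' original computation.

    ```python
    from statsmodels.stats.power import TTestIndPower

    # How many manuscripts per comparison arm are needed to detect a
    # standardized difference (Cohen's d) of 0.55 with two-sided alpha = 0.05
    # and 80% power?
    n_per_arm = TTestIndPower().solve_power(effect_size=0.55, alpha=0.05, power=0.80)
    print(f"~{n_per_arm:.0f} per arm, ~{2 * n_per_arm:.0f} in total")
    # -> roughly 53 per arm (about 106 overall), the same order of magnitude
    #    as the 100 papers reported in the abstract.
    ```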

    The effectiveness of computerized clinical guidelines in the process of care: a systematic review

    Background: Clinical practice guidelines have been developed with the aim of improving the quality of care. The implementation of computerized clinical guidelines (CCG) has been supported by the development of computerized clinical decision support systems. This systematic review assesses the impact of CCG on the process of care compared with non-computerized clinical guidelines. Methods: Specific features of CCG were studied through an extensive search of the scientific literature, querying the electronic databases Pubmed/Medline, Embase and the Cochrane Controlled Trials Register. A multivariable logistic regression was carried out to evaluate the association of CCG features with a positive effect on the process of care. Results: Forty-five articles were selected. The logistic model showed that automatic provision of recommendations in electronic form as part of the clinician workflow (odds ratio [OR] = 17.5; 95% confidence interval [CI]: 1.6-193.7) and publication year (OR = 6.7; 95% CI: 1.3-34.3) were statistically significant predictors. Conclusions: From the research that has been carried out, we can conclude that significant improvements in the process of care are shown after implementation of CCG. Our findings also suggest to clinicians, managers and other health care decision makers which features of CCG might improve the structure of a computerized system.
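
    The odds ratios above come from a multivariable logistic regression over study-level features. As a minimal sketch of how such odds ratios and their confidence intervals are obtained (the data below are simulated placeholders, not the review's dataset), one could fit:

    ```python
    import numpy as np
    import statsmodels.api as sm

    # Entirely hypothetical study-level data, one row per included trial.
    rng = np.random.default_rng(0)
    n = 45                             # number of selected articles, as reported
    workflow = rng.integers(0, 2, n)   # recommendations delivered in the clinician workflow (0/1)
    pub_year = rng.integers(0, 11, n)  # publication year, coded as years since 2000
    positive = rng.integers(0, 2, n)   # positive effect on the process of care (0/1)

    X = sm.add_constant(np.column_stack([workflow, pub_year]).astype(float))
    fit = sm.Logit(positive, X).fit(disp=False)

    odds_ratios = np.exp(fit.params)      # exponentiated coefficients are odds ratios
    or_conf_int = np.exp(fit.conf_int())  # 95% CIs on the odds-ratio scale
    print(odds_ratios, or_conf_int, sep="\n")
    ```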

    Promoting Patient Safety and Preventing Medical Error in Emergency Departments

    An estimated 108,000 people die each year from potentially preventable iatrogenic injury. One in 50 hospitalized patients experiences a preventable adverse event. Up to 3% of these injuries and events take place in emergency departments. With long and detailed training, morbidity and mortality conferences, and an emphasis on practitioner responsibility, medicine has traditionally faced the challenges of medical error and patient safety through an approach focused almost exclusively on individual practitioners. Yet no matter how well trained and how careful health care providers are, individuals will make mistakes because they are human. In general medicine, the study of adverse drug events has led the way to new methods of error detection and error prevention. A combination of chart reviews, incident logs, observation, and peer solicitation has provided a quantitative tool to demonstrate the effectiveness of interventions such as computer order entry and pharmacist order review. In emergency medicine (EM), error detection has focused on subjects of high liability: missed myocardial infarctions, missed appendicitis, and misreading of radiographs. Some system-level efforts in error prevention have focused on teamwork, on strengthening communication between pharmacists and emergency physicians, on automating drug dosing and distribution, and on rationalizing shifts. This article reviews the definitions, detection, and presentation of error in medicine and EM. Based on review of the current literature, recommendations are offered to enhance the likelihood of reduction of error in EM practice.

    The performance of the World Rugby Head Injury Assessment Screening Tool: a diagnostic accuracy study

    Background: Off-field screening tools, such as the Sports Concussion Assessment Tool (SCAT), have been recommended to identify possible concussion following a head impact where the consequences are unclear. However, real-life performance, and the diagnostic accuracy of constituent sub-tests, have not been well characterized. Methods: A retrospective cohort study was performed in elite Rugby Union competitions between September 2015 and June 2018. The study population comprised consecutive players identified with a head impact event undergoing off-field assessments with the World Rugby Head Injury Assessment (HIA01) screening tool, an abridged version of the SCAT3. Off-field screening performance was investigated by evaluating real-life removal-from-play outcomes and determining the theoretical diagnostic accuracy of the HIA01 tool, and of individual sub-tests, if player-specific baseline or normative sub-test thresholds were strictly applied. The reference standard was clinically diagnosed concussion determined by serial medical assessments. Results: One thousand one hundred and eighteen head impact events requiring off-field assessments were identified, resulting in 448 concussions. Real-life removal-from-play decisions demonstrated a sensitivity of 76.8% (95% CI 72.6–80.6) and a specificity of 86.6% (95% CI 83.7–89.1) for concussion (AUROC 0.82, 95% CI 0.79–0.84). Theoretical HIA01 tool performance worsened if pre-season baseline values (sensitivity 89.6%, specificity 33.9%, AUROC 0.62, p < 0.01) or normative thresholds (sensitivity 80.4%, specificity 69.0%, AUROC 0.75, p < 0.01) were strictly applied. Symptoms and clinical signs were the HIA01 screening tool sub-tests most predictive of concussion, with immediate memory and tandem gait providing little additional diagnostic value. Conclusions: These findings support expert recommendations that clinical judgement should be used in the assessment of athletes following head impact events. Substitution of the tandem gait and 5-word immediate memory sub-tests with alternative modes could potentially improve screening tool performance.
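
    As a worked illustration of how the headline figures fit together, the sketch below back-calculates an approximate 2x2 table from the reported totals (1118 assessments, 448 concussions) and the removal-from-play sensitivity and specificity; the cell counts are rounded reconstructions, not the study's raw data.

    ```python
    # Approximate 2x2 table reconstructed from the reported summary figures.
    concussed, total = 448, 1118
    tp = round(0.768 * concussed)            # concussed players removed from play
    fn = concussed - tp                      # concussed players kept on the field
    tn = round(0.866 * (total - concussed))  # non-concussed players kept on
    fp = (total - concussed) - tn            # non-concussed players removed

    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    # A single binary removal decision has one ROC operating point, so the
    # area under the curve reduces to the mean of sensitivity and specificity,
    # which is consistent with the reported AUROC of about 0.82.
    auroc = (sensitivity + specificity) / 2

    print(f"sensitivity ~ {sensitivity:.3f}, specificity ~ {specificity:.3f}, AUROC ~ {auroc:.2f}")
    ```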